The Question of Culture: Giulio Preti's 1972 Debate with Michel Foucault Revisited
- Ian Hacking
- Article
Ian Hacking sets out a parallel between Michel Foucault's thought and that of Giulio Preti, based on the debate between them that took place in 1972. This is the text of the speech given at the award of the ‘Giulio Preti’ Prize in November 2008.
On the Reality of Existence and Identity
- Ian Hacking
- Journal: Canadian Journal of Philosophy / Volume 8 / Issue 4 / December 1978
- Published online by Cambridge University Press: 01 January 2020, pp. 613-632
- Article
“The confusion of a logical with a real predicate,” according to the Critique of Pure Reason, “is almost beyond correction” (A598/B626). Kant did not assert that existence is no predicate, but that it is only a “logical” one, and not a “real” one. Much the same thing has been said about identity, although Kant himself thought it is real and not logical. We have long lacked a rigorous criterion to distinguish real from logical predicates, and hence have not been able to say why the difference matters. This paper has two objects. First it provides a demarcation between real and logical predicates that confirms Kant's dictum that existence is only “logical.” Secondly it states the theory of a “logical” (but not “real”) relation of identity. Perhaps this is not the only identity relation. I show only that once it has been precisely defined in the right setting, there are definite answers to a number of disputed questions about identity. Maybe there are other concepts of identity for which different answers are to be given, but I shall not discuss that disagreeable prospect here. A third application concerns the ontological argument.
Bernard Williams, Truth and Truthfulness: An Essay in Genealogy. Princeton: Princeton University Press, 2002. Pp. xi + 328.
- Ian Hacking
- Journal: Canadian Journal of Philosophy / Volume 34 / Issue 1 / March 2004
- Published online by Cambridge University Press: 01 January 2020, pp. 137-148
- Article
4 - The long run
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 35-47
- Chapter
Summary
The logic for support will serve as the underlying logic for our definition of chance. So now we seek connexions between support and long run frequency. There is one connexion which has usually seemed too obvious to be worth the stating, but this whole chapter is devoted to it. It serves to bring out the rôle, or lack of rôle, of the concept of the ‘long run’ in statistical inference and in reasoning about chances. Statisticians have usually been interested in new discoveries, but a philosophical inquiry into foundations has to begin with what everyone else takes for granted.
The suitor of the last chapter guessed who wrote a letter; he knew only that one correspondent writes more often than another. It seems part of our very understanding of frequency in the long run, that if A happens more often than B, and one or the other must have happened, then the best guess is A. This sounds correct no matter what one means by the long run. It does not follow from Kolmogoroff's axioms. Should it be derived from another postulate? Must it be postulated on its own? Or is it just false?
The questions are academic. Seldom are frequencies our only data. The world is too complicated and men know too much. Only in the imagination do frequencies serve as the sole basis of action. But it is probable that laws governing imaginary cases operate in life.
Suppose an urn to contain many balls. Most are black; a few white. Experiments show that when a blindfold person draws a ball at random, replaces it, and shakes the urn before redrawing, black appears far more often than white. So suppose this is a chance set-up in which the chance of black greatly exceeds that of white, and in which the result of any draw is independent of any other. If you are to guess about the result of the next draw, or any other, then, on this slender information, your best guess is ‘black’. Why?
Black is best only if you want to guess correctly. I shall take this for granted in the sequel, but perhaps it is not a necessary assumption. Instead of guessing one may speak more formally.
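As a concrete check on the urn example, here is a minimal simulation sketch. The 3/4 chance of black, and every other specific below, is assumed for illustration only; it is not the book's figure.

```python
import random

# Sketch of the urn set-up: draws with replacement, chance of black
# assumed to be 0.75. Compare always guessing 'black' with always
# guessing 'white'.
CHANCE_OF_BLACK = 0.75  # assumed figure, for illustration only
TRIALS = 100_000

draws = ['black' if random.random() < CHANCE_OF_BLACK else 'white'
         for _ in range(TRIALS)]

for guess in ('black', 'white'):
    correct = sum(1 for d in draws if d == guess)
    print(f"always guessing {guess!r}: {correct / TRIALS:.3f} correct")
# Guessing the more frequent colour is right about 75% of the time;
# guessing the rarer one, only about 25%.
```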
5 - The law of likelihood
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 48-66
- Chapter
Summary
The unique case rule tries to state a connexion between frequency and support. In extending the idea it is useful to begin with a piece of reasoning based upon frequencies, but which the rule cannot validate. Imagine a textual critic editing a fragment from a Ciceronian manuscript. He knows the fragment is thirteenth century, but not whether it is faithful to its original. He must guess whether it comes from a reliable source or not. He notes a solecism of the sort seldom found in a classical author, but which all too often creeps in at the hand of a medieval copyist. On this slender data, he guesses his fragment is not to be trusted. Perhaps the inference can be schematized:
(1) every X is either Y or Z—every fragment that includes a solecism is either unreliable or reliable;
(2) the long run frequency of X’s among Y's is greater than that among Z's; therefore, lacking other information,
(3) the guess that this X is Y is better supported than the guess that this X is Z.
In contrast, the unique case rule validates inferences of the following sort:
(1) every X is either Y or Z;
(2) the long run frequency of Y's among X's is greater than that of Z's among X's; therefore, lacking other information,
(3) the guess that this X is Y is better supported than the guess that this X is Z.
Only the second premiss differs, but when schematized as above, it is plain that the inferences do differ. Yet the contrast need not be too great. We must not be entirely blinded by the beauties of logical form.
I shall argue that both inferences are validated by exactly the same fact about frequency and support. Perhaps it has never been denied that the unique case rule and the critic's guess have the same foundation, but I am sure most students assume the two are rather different. It is commonly believed that the rule is justified by long run success, but with the other sort of inference, most sober minds have balked at inventing a ‘long run’ of chance set-ups among which a man might be successful. So it has generally been supposed that the rule rests on different principles than the critic's guess.
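A toy computation may make the difference between the two schemas vivid. The frequencies below are invented for illustration; the point is only that the critic's schema compares frequencies of X's among Y's and among Z's, while the unique case rule compares frequencies of Y's and Z's among X's.

```python
# Assumed long-run frequencies, chosen only for illustration:
p_x_given_y = 0.30   # frequency of solecisms (X) among unreliable fragments (Y)
p_x_given_z = 0.02   # frequency of solecisms (X) among reliable fragments (Z)

# The critic's schema compares these two likelihoods directly: the datum
# 'this fragment contains a solecism' favours Y over Z because 0.30 > 0.02,
# whatever the overall proportions of Y's and Z's may be.
print(f"likelihood ratio of Y to Z: {p_x_given_y / p_x_given_z:.0f}")

# The unique case rule would instead need the frequency of Y's among X's
# to exceed that of Z's among X's -- a different quantity, which depends
# also on how common Y's and Z's are overall.
```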
9 - The fiducial argument
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 122-147
- Chapter
Summary
Much can be done with mere comparisons of support, but statisticians want numerical measures of the degree to which data support hypotheses. Preceding chapters show how much can be achieved by an entirely comparative study. Now I shall argue that our earlier analysis can sometimes provide unique quantitative measures. It is not yet certain that this conclusion is correct, for it requires a new postulate. But the postulate is interesting in its own right, and though the resulting measures will have a fairly narrow domain, their very existence is remarkable.
The following development is novel, but in order to declare its origins I call this chapter the fiducial argument. The term is Fisher's. No branch of statistical writing is more mystifying than that which bears on what he calls the fiducial probabilities reached by the fiducial argument. Apparently the fiducial probability of an hypothesis, given some data, is the degree of trust you can place in the hypothesis if you possess only the given data. So we can at least be sure that it is close to our main concern, the degree to which data support hypotheses.
Fisher gave no general instruction for computing his fiducial probabilities. He preferred to work through a few attractive examples and then to invite his readers to perceive the underlying principles. Yet what seem to be his principles lead directly to contradiction.
Despite this bleak prospect, the positive part of the present chapter owes much to Fisher, and can even claim to be an explication of his ideas. Fortunately the consistent core of his argument is a good deal simpler than his elliptic papers have suggested. There has already been one succinct statement of it: Jeffreys was able not only to describe its logic, but also to indicate the sort of postulate Fisher must assume for his theory to work. It is a shame that Jeffreys’ insights have not been exploited until now.
Logic
If this chapter does contribute either to understanding Fisher's principles or to the sound foundation of a theory of quantitative support, it will only be through careful statement of underlying assumptions and conventions. It is by no means certain what ought to be the basic logic of quantitative support by data.
8 - Random sampling
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 108-121
- Chapter
Summary
Many persons take inference from sample to population as the very type of all reasoning in statistics. It did lead to the naming of our discipline. ‘Statistics’ once meant that part of political science concerned with collecting and analysing facts about the state. ‘Statistical inference’ meant the mode of inference peculiar to that part of the science. The meaning has since been extended, but it is no verbal accident that ‘population’ is the name now given in statistics to a collection of distinct things which may be sampled.
What is the pattern of inference from sample to population? ‘Make a random sample of a population, assume in certain respects that it is typical of the whole, and infer that the proportion of E's in the whole population is about equal to the proportion in the sample.’ That would be the most naïve account. It is grossly inadequate, especially if the idea of a random sample is left undefined. In fact the process of inference from random sample to population is entirely rigorous. We must first attend neither to sample nor population, but to the sampling procedure. Any such procedure involves a chance set-up. Once the analysis of random sampling is complete, inference from sample to population follows as a trivial corollary of the theory of chance. In addition, most of the hoary riddles about randomness can be solved or else shown irrelevant to statistics.
Randomness
Three questions about randomness are to be distinguished: (1) What does the English word ‘random’ mean? Perhaps this can be answered briefly, but it would take 100 pages to prove any answer correct. I shall not try here. (2) What is the appropriate mathematical concept, pertaining to infinite series, which is most similar to the ordinary conception of randomness? This problem has been definitively solved by Church, but we shall not require his results. (3) Which features of random samples are crucial to statistical inference? This is our question. We shall answer it in a sequence of distinct steps.
Random trials
It makes sense to describe trials on a set-up as random, but in what follows I shall not do so, because it is unnecessary. I take it that trials are called random if and only if they are independent. Hence it suffices to speak of independence.
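A minimal sketch of the corollary mentioned above, with an invented population: sampling with replacement makes successive draws independent trials on a chance set-up, and the sample proportion then estimates the population proportion.

```python
import random

population = ['E'] * 300 + ['not-E'] * 700   # assumed: 30% E's
SAMPLE_SIZE = 1_000

# Drawing with replacement: each draw is an independent trial.
sample = [random.choice(population) for _ in range(SAMPLE_SIZE)]
proportion = sample.count('E') / SAMPLE_SIZE
print(f"sample proportion of E's: {proportion:.3f} (population: 0.300)")
```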
3 - Support
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 25-34
- Chapter
Summary
Kolmogoroff's axioms, or some equivalent, are essential for a deep investigation of our subject, but do not declare when, on the basis of an experiment, one may infer the exact figure for the frequency in the long run, nor the distribution of chances on trials of some sort. They do not even determine which of a pair of hypotheses about a distribution is better supported by given data. Some other postulates are needed.
In what follows a statistical hypothesis shall be an hypothesis about the distribution of outcomes from trials of kind K on some set-up X. Our problems are various: given some data, which of several statistical hypotheses are best supported? When is a statistical hypothesis established, and when refuted? When is it reasonable to reject an hypothesis? We should also know what to expect from a chance set-up, if a statistical hypothesis is true.
Each question is important, and the correct answers, in so far as they are known, already have a host of applications. But the first question is crucial to the explication of chance. If no one can tell in the light of experimental data which of several hypotheses is best supported, statistical hypotheses are not empirical at all, and chance is no physical property. So our first task is to analyse the support of statistical hypotheses by experimental data.
It is common enough in daily life to say that one hypothesis is better supported than another. A man will claim that his version of quantum theory is better supported than a rival's, or that while there is little support in law for one person's title to his land, there is a good deal to be said for someone else's claim. Since analysis must begin somewhere, I shall not say much about the general notion of support. But I shall give a set of axioms for comparing support for different hypotheses on different data, and, very much later, axioms concerning the measurement of support-by-data. It is to be hoped that the common understanding of the English word ‘support’ plus a set of axioms declaring the principles actually to be used, will suffice as a tool for analysis.
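By way of anticipation, here is a sketch, mine rather than the book's, of the sort of comparison such axioms are meant to license. The data (7 heads in 10 tosses) and the two rival statistical hypotheses are invented; the comparison turns on which hypothesis gives the observed data the greater chance.

```python
from math import comb

heads, tosses = 7, 10   # assumed data: 7 heads in 10 tosses

def likelihood(p):
    """Chance of the observed data if the chance of heads per toss is p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

for p in (0.5, 0.75):   # two rival statistical hypotheses, assumed
    print(f"chance of heads {p}: likelihood {likelihood(p):.4f}")
# On these data alone, the hypothesis conferring the greater likelihood
# on what was observed is the better supported of the two.
```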
7 - Theories of testing
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 81-107
- Chapter
Summary
Henceforth it will be assumed as proven that an adequate theory of testing must consider not only the statistical hypothesis under test, but also rivals to it. This may be common sense: ‘don't reject something unless you've something better’. It involves a conception now becoming general in the philosophy of science, one which is currently striving to oust the former idea that an hypothesis can be rejected independently of what other theories are available.
Even if this kind of attitude be agreeable, it is far from evident how rival hypotheses are to be weighed. There may be, as the last chapter has already warned, radically different kinds of test. The sequel will show that controversy between classical theories of testing stems in part from their trying to answer different questions. The most famous theory, due to Neyman and Pearson, will be shown to have very limited application. For the present I shall continue to regard the central problem as that of tests which can imply, on the basis of experimental results, whether or not an individual hypothesis should be rejected. Tests thus divide possible experimental results into two classes: those which suffice to reject the hypothesis, and those which do not. The former class of results will be called the rejection class.
Likelihood tests
Our theory of support leads directly to the theory of testing suggested in the last chapter. An hypothesis should be rejected if and only if there is some rival hypothesis much better supported than it is. Support has already been analysed in terms of ratios of likelihoods. But what shall serve as ‘much better supported’? For the present I leave this in abeyance, and speak merely of tests of different stringency. With each test will be associated a critical ratio. The greater the critical ratio, the more stringent the test. Roughly speaking, hypothesis h will be rejected in favour of rival i at critical level α just if the likelihood ratio of i to h exceeds α.
Hence suppose we have data stating that the true distribution on trials of some kind is a member of the class ∆. By a simple statistical hypothesis we mean one of the form, ‘The true distribution is D’, where D will be in ∆.
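A sketch of such a likelihood test, with invented data, hypotheses, and critical ratio: reject h in favour of the rival i just if the likelihood ratio of i to h exceeds α.

```python
from math import comb

heads, tosses = 9, 10   # assumed experimental result
p_h, p_i = 0.5, 0.9     # two simple hypotheses about the chance of heads
ALPHA = 20.0            # assumed critical ratio; the greater, the more stringent

def likelihood(p):
    """Chance of the observed result if the chance of heads per toss is p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

ratio = likelihood(p_i) / likelihood(p_h)
print(f"likelihood ratio of i to h: {ratio:.1f}")
print("reject h in favour of i" if ratio > ALPHA else "do not reject h")
```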
13 - The subjective theory
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 192-210
- Chapter
Summary
The subjective theory of statistical inference descends from Bayes; its immediate progenitors are Ramsey and de Finetti; Savage is its most recent distinguished patron. It aims at analysing rational, or at least consistent, procedures for choosing among statistical hypotheses. In some ways reaching further than our theory of support, it is in a correspondingly less stable state. But the two theories are compatible. Some tenets of subjectivists conflict with the theory of support, but do not seem essential to subjectivism. Both theories combine elements of caution and ambition, but where one is bold, the other is wary. The present chapter will not contribute to the subjective theory, but will try to distinguish its province from the theory of statistical support.
The two theories
As presented in this book the theory of support analyses only inferences between joint propositions; essentially, inferences from statistical data either to statistical hypotheses, or else to predictions about the outcomes of particular trials. It attempts a rather precise analysis of these inferences, and in so doing is perhaps too audacious, but at least it proffers postulates rich enough in consequences that they can be tested and, one hopes, revised in the light of counterexamples. But in another respect the theory is timid, for it gives no account of how to establish statistical data. Thus although it may claim to encompass all that is precisely analysable in statistical inference, it cannot pretend to cover all the intellectual moves made by statisticians. It would defend itself against criticism on this score by maintaining that the way in which statistical data are agreed on—partly by experiment, partly by half-arbitrary, half-judicious simplification—is not peculiar to statistics and so should not seek an especially statistical foundation.
But the subjective theory embraces all work done in statistics, and perhaps all inductive reasoning in any field. At the same time it hesitates to appraise inferences made on the basis of particular statistical data. It is concerned only that inferences be consistent with each other. The theory's key concept has been variously called ‘subjective probability’, ‘personal probability’, ‘intuitive probability’, ‘probability’ and ‘personal betting rate’.
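The consistency demanded can be put in a few lines. The numbers below are invented; what the subjective theory requires is only that personal probabilities, once assigned, be revised by Bayes' theorem.

```python
prior_h = 0.3                 # personal probability of hypothesis h, assumed
p_data_given_h = 0.8          # chance of the observed data if h, assumed
p_data_given_not_h = 0.2      # chance of the data if not-h, assumed

# Bayes' theorem: P(h | data) = P(data | h) P(h) / P(data)
p_data = p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
posterior_h = p_data_given_h * prior_h / p_data
print(f"personal probability of h: {prior_h} before, {posterior_h:.3f} after")
```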
11 - Point estimation
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 160-174
- Chapter
Summary
Given some data our problem is to discover the best point estimate of a magnitude. For this purpose a good estimate will be what I have called a well-supported one: an estimate which the data give good reason to suppose is close to the true value. We assume that a scale of closeness is implied in the statement of the problem. We shall be concerned with the excellence of individual estimates, and of estimators making estimates for particular problems, rather than with comparing the average properties of estimators. We seek estimators which give well-supported estimates.
Only rather limited and feeble solutions are offered here. Perhaps there is no general answer to fundamental questions about estimation. Maybe estimation, as I have understood it in the preceding chapter, is only barely relevant to statistical inference. One ought, perhaps, to search for other, related, notions which are more fruitful. But first we must try to do our best with well-supported estimates of statistical parameters.
An historical note
A little history will hint at the flavour of chaos in estimation theory. Unfortunately few students have distinguished the property of being a well-supported estimate from the property of being an estimate made by an estimator which is in some way close on the average. So the traditional estimators I am about to describe were seldom aimed at one or the other of these two properties but, if anything, at both at once.
Estimation in statistics is relatively recent; it followed two centuries of estimation in other fields. Most work did, however, make use of the doctrine of chances. The usual problem concerned measurement. Some quantity, such as the distance of a star or the melting point of a chemical, is measured several times; there are small discrepancies; now use the measurements to estimate the true value of the physical quantity. Another problem is curve fitting; try to fit a smooth curve to a number of different points all based on observations. This is both a problem of guessing the true curve, and of estimating the characteristics of that curve.
There were forerunners, but Laplace and Gauss are the great pioneers of what came to be called the theory of errors. Despite their many theoretical insights, the whole science remained pretty ad hoc until the 1920's. Many issues had to be settled which now are taken for granted.
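The classical measurement problem can be sketched in a few lines (all figures invented): several discrepant measurements of one quantity, with the sample mean serving as the point estimate of the true value.

```python
import random

TRUE_VALUE = 960.0   # e.g. a melting point; assumed, for illustration
random.seed(1)
# Twelve measurements with small, normally distributed discrepancies.
measurements = [TRUE_VALUE + random.gauss(0, 2.0) for _ in range(12)]

estimate = sum(measurements) / len(measurements)   # the sample mean
print(f"estimate {estimate:.2f} from {len(measurements)} measurements "
      f"(true value {TRUE_VALUE})")
```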
Index
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 211-215
- Chapter
6 - Statistical tests
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 67-80
- Chapter
Summary
By rejecting a statistical hypothesis I shall mean concluding that it is false. On what statistical data should this be done? Braithwaite thought the matter so crucial that he tried to state the very meaning of ‘probability statements’ in terms of rules for their rejection. We shall examine his ideas later. First we must establish when evidence does justify rejection. To justify rejection, the evidence need not entail that the hypothesis is false. But what relations must it bear to the hypothesis?
Perhaps rejection covers two distinct topics. There have been many debates on this point, and it cannot be settled before further analysis. But a warning may be useful. An hypothesis may be rejected because of the evidence against it. This is my main subject. But situations can arise in which it is wise to reject an hypothesis even though there is little evidence against it. Suppose a great many hypotheses are under test. A good strategy for testing is one which rejects as many false and as few true hypotheses as possible. The best strategy might occasionally entail rejecting hypotheses even though there is little evidence against them. This sounds implausible, but examples will be given.
There is no general agreement on whether rejection should be studied in terms of evidence or strategies. I do not want to prejudge the issue. But I shall begin with examples in which an hypothesis should be rejected because of the evidence against it. I shall not begin with examples in which a great many similar hypotheses are under test. The logic of the two may be the same, for all that has been proved. But I shall not begin by assuming it.
The forthcoming discussion is, as usual, very academic. It concerns the relation of statistical hypotheses to statistical data. Generally one has all sorts of data bearing on an interesting statistical hypothesis, far more than merely statistical data. Hence one's problem is generally more complex than any to be discussed in this chapter. Here I deal only with data which may be precisely evaluated, and whose evaluation is peculiar to statistics.
10 - Estimation
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 148-159
- Chapter
Summary
Estimation theory is the most unsatisfactory branch of every school on the foundations of statistics. This is partly due to the unfinished state of our science, but there are general reasons for expecting the unhappy condition to continue. The very concept of estimation is ill adapted to statistics.
The theory of statistical estimation includes estimating particular characteristics of distributions and also covers guessing which possible distribution is the true one. It combines point estimation, in which the estimate of a magnitude is a particular point or value, and interval estimation, in which the estimate is a range of values which, it is hoped, will include the true one. Many of the questions about interval estimation are implicitly treated in the preceding chapter. A suitable interval for an interval estimate is one such that the hypothesis that the interval includes the true value is itself well supported. So problems about interval estimation fall under questions about measuring support for composite hypotheses. Thus the best known theories of interval estimation have been discussed already. The present chapter will treat some general questions about estimation, but will chiefly be devoted to defining one problem about point estimation. It is not contended that this is the only problem, but it does seem an important one, and one which has been little studied. When the problem has been set, the succeeding chapter will discuss how it might be solved. But neither chapter aims at a comprehensive survey of estimation theory; both aim only at clarifying one central problem about estimation.
Guesses, estimates, and estimators
An estimate, or at least a point estimate, is a more or less soundly based guess at the true value of a magnitude. The word ‘guess’ has a wider application than ‘estimate’, but we shall not rely on the verbal idiosyncrasies of either. It must be noted, however, that by its very nature an estimate is supposed to be close to the true value of what it estimates. Since it is an estimate of a magnitude, the magnitude is usually measured along some scale, and this scale can be expected to provide some idea of how close the estimate is.
Contents
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. v-viii
- Chapter
Frontmatter
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. i-iv
- Chapter
12 - Bayes' theory
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 175-191
- Chapter
1 - Long run frequencies
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 1-11
- Chapter
Summary
The problem of the foundation of statistics is to state a set of principles which entail the validity of all correct statistical inference, and which do not imply that any fallacious inference is valid. Much statistical inference is concerned with a special kind of property, and a good deal of the foundations depends upon its definition. Since no current definition is adequate, the next several chapters will present a better one.
Among familiar examples of the crucial property, a coin and tossing device can be so made that, in the long run, the frequency with which the coin falls heads when tossed is about 3/4. Overall, in the long run, the frequency of traffic accidents on foggy nights in a great city is pretty constant. More than 95% of a marksman's shots hit the bull's eye. No one can doubt that these frequencies, fractions, ratios, and proportions indicate physical characteristics of some parts of the world. Road safety programmes and target practice alike assume the frequencies are open to controlled experiment. If there are sceptics who insist that the frequency in the long run with which the coin falls heads is no property of anything, they have this much right on their side: the property has never been clearly defined. To define it is a serious conceptual problem.
The property need not be static. It is the key to many dynamic studies. In an epidemic the frequency with which citizens become infected may be a function of the number ill at the time, so that knowledge of this function would help to chart future ravages of the disease. Since the frequency is changing, we must consider frequencies over a fairly short period of time; perhaps it may even be correct to consider instantaneous frequencies, but such a paradoxical conception must await further analysis.
First the property needs a name. We might speak of the ratio, proportion, fraction or percentage of heads obtained in coin tossing, but each of these words suggests a ratio within a closed class. It is important to convey the fact that whenever the coin is tossed sufficiently often, the frequency of heads is about 3/4. So we shall say, for the present, that the long run frequency is about 3/4.
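A minimal simulation may fix the idea. The device and its 3/4 chance are assumed, not the book's: the frequency of heads settles near 3/4 whenever the tosses are sufficiently many.

```python
import random

CHANCE_OF_HEADS = 0.75   # assumed property of the coin and tossing device
random.seed(0)

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < CHANCE_OF_HEADS for _ in range(n))
    print(f"after {n:>9,} tosses: frequency of heads = {heads / n:.4f}")
```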
2 - The chance set-up
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. 12-24
- Chapter
Summary
Of what kind of thing is chance, or frequency in the long run, a property? Early writers may have conceived chances as properties of things like dice. Von Mises defines probability as the property of a sequence, while Neyman applies it to sets called fundamental probability sets. Fisher has an hypothetical infinite population in mind. But a more naïve answer stands out. The frequency in the long run of heads from a coin tossing device seems to be a property of the coin and device; the frequency in the long run of accidents on a stretch of highway seems to be a property of, in part, the road and those who drive upon it. We have no general name in English for this sort of thing. I shall use ‘chance set-up’. We also need a term corresponding to the toss of the coin and observing the outcome, and equally to the passage of a day on which an accident may occur. For three centuries the word ‘trial’ has been used in this sense, and I shall adopt it.
A chance set-up is a device or part of the world on which might be conducted one or more trials, experiments, or observations; each trial must have a unique result which is a member of a class of possible results.
A piece of radium together with a recording mechanism might constitute a chance set-up. One possible trial consists in observing whether or not the radium emits radiation in a small time interval. Possible results are ‘radiation’ and ‘none’. A pair of mice may provide a chance set-up, the trial being mating and the possible results the possible genetic make-ups of the offspring. The notion of a chance set-up is as old as the study of frequency. For Cournot, frequencies are properties of parts of the world, though he is careful not to say exactly what parts, in general. Venn's descriptions make it plain that he has a chance set-up in mind, and that it is this which leads him to the idea of an unending series of trials. Von Mises’ probability is a property of a series, but it is intended as a model of a property of what he calls an experimental set-up—I have copied the very word ‘set-up’ from his English translators.
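The definition lends itself to a small schematic rendering, mine rather than the book's: a chance set-up paired with a way of conducting trials, each trial yielding a unique result from a class of possible results.

```python
import random
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass
class ChanceSetUp:
    possible_results: FrozenSet[str]
    conduct_trial: Callable[[], str]   # one trial yields one unique result

def toss() -> str:
    # Assumed biased coin-and-device, chance of heads 3/4.
    return 'heads' if random.random() < 0.75 else 'tails'

coin_and_device = ChanceSetUp(frozenset({'heads', 'tails'}), toss)
result = coin_and_device.conduct_trial()
assert result in coin_and_device.possible_results
print(result)
```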
Preface
- Ian Hacking
- Preface by Jan-Willem Romeijn
- Book: Logic of Statistical Inference
- Published online: 05 July 2016
- Print publication: 26 August 2016, pp. xi-xii
- Chapter
Summary
This book analyses, from the point of view of a philosophical logician, the patterns of statistical inference which have become possible in this century. Logic has traditionally been the science of inference, but although a number of distinguished logicians have contributed under the head of probability, few have studied the actual inferences made by statisticians, or considered the problems specific to statistics. Much recent work has seemed unrelated to practical issues, and is sometimes veiled in a symbolism inscrutable to anyone not educated in the art of reading it. The present study is, in contrast, very much tied to current problems in statistics; it has avoided abstract symbolic systems because the subject seems too young and unstable to make them profitable. I have tried to discover the simple principles which underlie modern work in statistics, and to test them both at a philosophical level and in terms of their practical consequences. Technicalities are kept to a minimum.
It will be evident how many of my ideas come from Sir Ronald Fisher. Since much discussion of statistics has been coloured by purely personal loyalties, it may be worth recording that in my ignorance I knew nothing of Fisher before his death and have been persuaded to the truth of some of his more controversial doctrines only by piecing together the thought in his elliptic publications. My next debt is to Sir Harold Jeffreys, whose Theory of Probability remains the finest application of a philosophical understanding to the inferences made in statistics. At a more personal level, it is pleasant to thank the Master and Fellows of Peterhouse, Cambridge, who have provided and guarded the leisure in which to write. I have also been glad of a seminar consisting of Peter Bell, Jonathan Bennett, James Cargile and Timothy Smiley, who, jointly and individually, have helped to correct a great many errors. Finally I am grateful to R. B. Braithwaite for his careful study of the penultimate manuscript, and to David Miller for proof-reading.
Much of chapter 4 has appeared in the Proceedings of the Aristotelian Society for 1963–4, and is reprinted by kind permission of the Committee. The editor of the British Journal for the Philosophy of Science has authorized republication of some parts of my paper ‘On the Foundations of Statistics’, from volume xv.